
    Multiscale Bayesian State Space Model for Granger Causality Analysis of Brain Signal

    Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool for answering theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency-specific Granger causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. For that purpose we use an adaptive Kalman-filter-type estimator of a linear Gaussian vector autoregressive model with coefficients that evolve over time. The estimation procedure is achieved through variational Bayesian approximation and is extended to multiple trials. This Bayesian State Space (BSS) model provides a dynamical Granger causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method is based on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task shows the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex. Keywords: à trous Haar wavelets; Multiple trials; Neuroscience data; Nonstationarity; Time-frequency; Variational methods. The published version of this article is Cekic, S., Grandjean, D., & Renaud, O. (2018). Multiscale Bayesian state-space model for Granger causality analysis of brain signal. Journal of Applied Statistics. https://doi.org/10.1080/02664763.2018.145581
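    The redundant à trous Haar decomposition described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the function name, the causal edge handling, and the fixed number of scales are assumptions made for the example. The key property it demonstrates is that the signal decomposes exactly into scale-wise detail signals plus a final smooth, which is what lets the BSS model attribute Granger causality to specific frequency bands.

```python
import numpy as np

def a_trous_haar(x, n_scales=4):
    """Redundant (undecimated) a trous Haar decomposition.

    Returns (details, smooth): detail signals w_1..w_J and the final
    smooth approximation c_J, satisfying x == smooth + sum(details)
    at every time point.
    """
    c = np.asarray(x, dtype=float)
    details = []
    for j in range(n_scales):
        # Causal Haar smoothing "with holes": average c[t] with c[t - 2^j].
        shifted = np.roll(c, 2 ** j)
        # Simple edge handling (an assumption): copy the signal at the start,
        # so the first 2^j detail coefficients are zero.
        shifted[: 2 ** j] = c[: 2 ** j]
        c_next = 0.5 * (c + shifted)
        details.append(c - c_next)  # wavelet (detail) coefficients at scale j+1
        c = c_next
    return details, c
```

    Because each detail is defined as the difference between successive smooths, the reconstruction is exact by construction, regardless of how the boundary is handled.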

    A Gradient Inequality at Infinity for Tame Functions

    Let f be a C1 function defined over R^n and definable in a given o-minimal structure M expanding the real field. We prove here a gradient-like inequality at infinity in a neighborhood of an asymptotic critical value c. When f is C2, we use this inequality to discuss the trivialization by the gradient flow of f in a neighborhood of a regular asymptotic critical level.

    Motor Commands of Facial Expressions: The Bereitschaftspotential of Posed Smiles

    Electroencephalographic (EEG) premotor potentials with negative polarity, like the Bereitschaftspotential (BP), are known to precede self-paced voluntary movements of the limbs and other body parts. This is, however, the first report of such premotor potentials before posed smiles. Scalp EEG was recorded in 16 healthy participants performing self-paced unilateral and bilateral smiles and unilateral finger movements. Amplitudes over six central electrodes and voltage distributions over the entire scalp were compared across conditions at the time of EMG onset, thus focusing on the late BP. Results show the presence of a premotor potential before posed smiles with a later onset, symmetrical bilateral distribution, and smaller amplitude at the time of movement onset, compared to finger movements. Future studies should investigate the BP before various types of emotional and non-emotional facial expressions.

    Specific Brain Networks during Explicit and Implicit Decoding of Emotional Prosody

    To better define the underlying brain network for the decoding of emotional prosody, we recorded high-resolution brain scans during an implicit and explicit decoding task of angry and neutral prosody. Several subregions in the right superior temporal gyrus (STG) and bilaterally in the inferior frontal gyrus (IFG) were sensitive to emotional prosody. Implicit processing of emotional prosody engaged regions in the posterior superior temporal gyrus (pSTG) and bilateral IFG subregions, whereas explicit processing relied more on the mid STG, left IFG, amygdala, and subgenual anterior cingulate cortex. Furthermore, whereas some bilateral pSTG regions and the amygdala showed general sensitivity to prosody-specific acoustical features during implicit processing, activity in inferior frontal brain regions was insensitive to these features. Together, the data suggest a differentiated STG, IFG, and subcortical network of brain regions, which varies with the level of processing and shows a higher specificity during explicit decoding of emotional prosody.

    Reappraising the voices of wrath

    Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because past research has relied almost exclusively on visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated using psychophysiological interaction (PPI) analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus, and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended on the right side of the brain. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.

    Electrophysiological Correlates of Rapid Spatial Orienting Towards Fearful Faces

    We investigated the spatio-temporal dynamics of attentional bias towards fearful faces. Twelve participants performed a covert spatial orienting task while visual event-related brain potentials (VEPs) were recorded. Each trial consisted of a pair of faces (one emotional and one neutral) briefly presented in the upper visual field, followed by a unilateral bar presented at the location of one of the faces. Participants had to judge the orientation of the bar. Comparing VEPs to bars shown at the location of an emotional (valid) versus neutral (invalid) face revealed an early effect of spatial validity: the lateral occipital P1 component (~130 ms post-stimulus) was selectively increased when a bar replaced a fearful face compared to when the same bar replaced a neutral face. This effect was not found with upright happy faces or inverted fearful faces. A similar amplification of P1 has previously been observed in electrophysiological studies of spatial attention using non-emotional cues. In a behavioural control experiment, participants were also better at discriminating the orientation of the bar when it replaced a fearful rather than a neutral face. In addition, VEPs time-locked to the face-pair onset revealed a C1 component (~90 ms) that was greater for fearful than happy faces. Source localization (LORETA) confirmed an extrastriate origin of the P1 response showing a spatial validity effect, and a striate origin of the C1 response showing an emotional valence effect. These data suggest that activity in primary visual cortex might be enhanced by fear cues as early as 90 ms post-stimulus, and that such effects might result in a subsequent facilitation of sensory processing for a stimulus appearing at the same location. These results provide evidence for neural mechanisms allowing rapid, exogenous spatial orienting of attention towards fear stimuli.

    Music ensemble as a resilient system. Managing the unexpected through group interaction

    The present contribution provides readers from diverse fields of psychology with a new and comprehensive model for understanding the characteristics of music ensembles. The model is based on a novel heuristic approach whose key construct is resilience, intended here as the ability of a system to adapt to external perturbations and anticipate future events. The paper clarifies the specificity of the music ensemble as an original social and creative activity, and how some mechanisms, at the individual (cognitive) and group (coordination) levels, are enacted in a particular way that endows these groups with an exceptional capacity for resilience. There is now a wealth of evidence isolating the psychological mechanisms involved in these processes. However, there is much less focus on conditions in which the group has to face unexpected and potentially performance-disruptive events. The resilience approach offers a more thorough explanation of the regulatory strategies that musicians may resort to in order to maintain their performance at an optimal level. Music ensembles of different sizes are presented as case studies of how such systems (and their individual members) resist error and maintain joint performance. Three hypothetical scenarios are further proposed that epitomize resilient or non-resilient musical teams. The present contribution further proposes hypotheses and formulates predictions on which combinations of individual and group factors foster team resilience. This model further accommodates the most recent findings in neuroscience and experimental psychology. Besides highlighting the potential of the music ensemble for psychological research, it offers hints about how resilience could be trained.

    Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional-processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia-based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of the International Parkinson and Movement Disorder Society.

    Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations

    Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations to command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained production type of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.